AI Writing Is Everywhere — New Research Reveals Its Impact on Online Communication

AI-generated writing is reshaping online communication, with 24% of press releases, 18% of financial complaints, and 10% of job postings using LLMs. While AI boosts efficiency and accessibility, it also risks generic content and trust issues. Researchers warn of "model collapse" as AI trains on AI-written text, potentially degrading its quality. The shift is inevitable, but balancing authenticity and AI reliance will be key.

March 24, 2025

The rapid expansion of AI-powered large language models (LLMs) is reshaping business, consumer, and institutional communication—and by extension, the internet itself. New research led by Weixin Liang at Stanford University has quantified just how prevalent AI writing has become across various domains, from corporate press releases to job postings and United Nations reports.

How Widespread Is AI Writing?

By the end of 2024, AI-assisted writing had become widespread across domains, with:

  • 24% of business press releases containing AI-generated content

  • 18% of consumer financial complaints written with AI assistance

  • 10% of job postings leveraging LLMs

  • 14% of UN press releases incorporating AI writing

This study, one of the largest empirical investigations of AI-generated content, reviewed over 300 million online documents from 2022 to 2024.

“We developed this method to quantify and compare frequencies of words more and less likely used by AI, tracking their prevalence over time across many types of text.” — Weixin Liang
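To give a flavor of what word-frequency tracking can look like, here is a minimal, hypothetical Python sketch. It is not the Stanford team's estimator, which fits statistical models to full word-frequency distributions; it simply measures how often an assumed set of "AI-flavored" marker words appears in a document collection, year by year.

```python
# A toy, hypothetical sketch of marker-word tracking: measure what share of
# documents in each year's corpus contain at least one word that is assumed
# to be over-represented in LLM output. This is NOT the study's method,
# which fits statistical models to full word-frequency distributions.
import re

# Assumed "AI-flavored" marker words (illustrative, not taken from the study).
MARKER_WORDS = {"delve", "pivotal", "intricate", "showcase", "realm"}

def marker_share(documents):
    """Fraction of documents containing at least one marker word."""
    if not documents:
        return 0.0
    hits = sum(
        1 for doc in documents
        if set(re.findall(r"[a-z']+", doc.lower())) & MARKER_WORDS
    )
    return hits / len(documents)

# Placeholder corpora keyed by year (made-up strings, not real data).
corpora = {
    2022: ["We announce our quarterly results today.",
           "Hiring a backend engineer for our payments team."],
    2024: ["We delve into the pivotal findings of this quarter.",
           "Join our dynamic team and showcase your skills."],
}
for year in sorted(corpora):
    share = marker_share(corpora[year])
    print(year, f"{share:.0%} of documents contain marker words")
```

In practice, any such approach needs careful controls, since individual words drift in popularity for reasons unrelated to AI; the published method addresses this by comparing full distributions rather than counting a handful of words.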

The Dual Impact of AI-Generated Writing

The widespread adoption of LLM-generated content brings both benefits and risks:

Efficiency & Accessibility

  • AI helps non-native speakers express themselves clearly.

  • It streamlines corporate communication and makes content creation faster and more scalable.

  • In consumer finance, AI-assisted complaints have helped individuals better understand their rights, leading to successful resolutions.

Authenticity & Overuse Issues

  • AI-generated content risks making communication generic and homogeneous, reducing distinctive brand voices.

  • Job postings are increasingly AI-generated, with 10% of LinkedIn listings using LLMs—potentially misleading applicants.

  • Overuse may erode trust, with readers questioning “Who really wrote this?”

A Dangerous Feedback Loop: AI Training on AI Content

A major concern in the AI research community is the self-reinforcing cycle of AI models being trained on AI-generated content rather than human-created text.

  • "Model Collapse" Phenomenon: A 2023 study in Nature found that when LLMs are trained primarily on AI-generated text, their accuracy, nuance, and ability to generate meaningful content deteriorates over time.

  • This recursive AI feedback loop could make future AI less creative, less accurate, and more biased.

“If left unchecked, model collapse may render LLMs increasingly useless, producing ever more repetitive and less meaningful content.” — Liang
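The contraction is easy to see in a toy setting. The sketch below is an assumption-laden cartoon, not a reproduction of the Nature experiments: it repeatedly fits a simple Gaussian "model" only to samples generated by the previous fit, and over many generations the fitted spread tends to shrink, mirroring the loss of diversity described above.

```python
# Toy illustration of the "model collapse" feedback loop in the simplest
# possible setting: each generation is "trained" (here, a Gaussian is fitted)
# only on samples produced by the previous generation. Estimation error
# compounds, and the learned distribution's spread tends to drift toward zero
# over many generations. This is a cartoon analogue of diversity loss in
# LLMs, not a reproduction of the Nature paper's experiments.
import random
import statistics

random.seed(42)
n_samples = 10          # deliberately small so the effect shows up quickly
mu, sigma = 0.0, 1.0    # generation 0: the "human-written" distribution

for generation in range(1, 101):
    synthetic = [random.gauss(mu, sigma) for _ in range(n_samples)]
    mu = statistics.fmean(synthetic)      # refit using synthetic data only
    sigma = statistics.stdev(synthetic)
    if generation % 20 == 0:
        print(f"generation {generation:3d}: sigma = {sigma:.4f}")
```

Any single run fluctuates, but over enough generations the errors compound and the synthetic distribution narrows, which is the intuition behind the warning about recursive training.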

What’s Next for AI Writing?

As AI-generated writing moves from exception to norm, companies will need to balance efficiency with authenticity. Liang and his team are already exploring new research directions, including:

  • AI’s impact on financial communications

  • AI’s role in decision-making for high-stakes scenarios

One thing is clear: AI-generated content isn’t just a trend—it’s a seismic shift in how the world communicates.
